22 research outputs found

    Representation of Gaussian fields in series with independent coefficients

    The numerical discretization of problems with stochastic data or stochastic parameters generally involves the introduction of coordinates that describe the stochastic behaviour, such as coefficients in a series expansion or values at discrete points. The series expansion of a Gaussian field with respect to any orthonormal basis of its Cameron-Martin space has independent standard normal coefficients. A standard choice for numerical simulations is the Karhunen-Loève series, which is based on eigenfunctions of the covariance operator. We suggest an alternative, the hierarchic discrete spectral expansion, which can be constructed directly from the covariance kernel. The resulting basis functions are often well localized, and the convergence of the series expansion seems to be comparable to that of the Karhunen-Loève series. We provide explicit formulas for particular cases and general numerical methods for computing exact representations of such bases. Finally, we relate our approach to numerical discretizations based on replacing a random field by its values on a finite set.
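
    As a point of reference for the Karhunen-Loève construction mentioned above, the sketch below samples a Gaussian field on a grid from a truncated discrete KL expansion. It is not taken from the paper: the exponential kernel, the grid, and the function name kl_sample are assumptions for illustration, and quadrature weights of the covariance operator are omitted for brevity.

        # Minimal sketch (assumed setup): sample a centered Gaussian field on a grid
        # from a truncated Karhunen-Loeve expansion of a given covariance kernel.
        import numpy as np

        def kl_sample(grid, cov_kernel, n_terms, rng):
            # Discretized covariance operator: C[i, j] = c(x_i, x_j)
            # (quadrature weights omitted for brevity)
            C = cov_kernel(grid[:, None], grid[None, :])
            # Eigenpairs of C, reordered by decreasing eigenvalue
            lam, phi = np.linalg.eigh(C)
            lam, phi = lam[::-1], phi[:, ::-1]
            # Independent standard normal coefficients, one per retained mode
            xi = rng.standard_normal(n_terms)
            return phi[:, :n_terms] @ (np.sqrt(np.maximum(lam[:n_terms], 0.0)) * xi)

        # Example: exponential covariance c(x, y) = exp(-|x - y|) on [0, 1]
        grid = np.linspace(0.0, 1.0, 200)
        field = kl_sample(grid, lambda x, y: np.exp(-np.abs(x - y)),
                          n_terms=20, rng=np.random.default_rng(0))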

    Sparse tensor discretizations of high-dimensional parametric and stochastic PDEs

    Partial differential equations (PDEs) with random input data, such as random loadings and coefficients, are reformulated as parametric, deterministic PDEs on parameter spaces of high, possibly infinite dimension. Tensorized operator equations for spatial and temporal k-point correlation functions of their random solutions are derived. Parametric, deterministic PDEs for the laws of the random solutions are derived. Representations of the random solutions' laws on infinite-dimensional parameter spaces in terms of 'generalized polynomial chaos' (GPC) series are established. Recent results on the regularity of solutions of these parametric PDEs are presented. Convergence rates of best N-term approximations, for adaptive stochastic Galerkin and collocation discretizations of the parametric, deterministic PDEs, are established. Sparse tensor products of hierarchical (multi-level) discretizations in physical space (and time), and GPC expansions in parameter space, are shown to converge at rates which are independent of the dimension of the parameter space. A convergence analysis of multi-level Monte Carlo (MLMC) discretizations of PDEs with random coefficients is presented. Sufficient conditions on the random inputs for superiority of sparse tensor discretizations over MLMC discretizations are established for linear elliptic, parabolic and hyperbolic PDEs with random coefficients.
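
    For orientation, taking an affine-parametric diffusion coefficient as the model case (the notation below is assumed, not quoted from the paper), the parametric reformulation and the GPC series have the schematic form

        a(x, y) = \bar{a}(x) + \sum_{j \ge 1} y_j \, \psi_j(x),
        \qquad y = (y_j)_{j \ge 1} \in [-1, 1]^{\mathbb{N}},

        u(x, y) = \sum_{\nu \in \mathcal{F}} u_\nu(x) \, P_\nu(y),
        \qquad P_\nu(y) = \prod_{j \ge 1} P_{\nu_j}(y_j),

    where \mathcal{F} is the set of finitely supported multi-indices and the P_n are, for example, Legendre polynomials; a best N-term approximation retains the N terms with the largest coefficient norms \| u_\nu \|.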

    A convergent adaptive stochastic Galerkin finite element method with quasi-optimal spatial meshes

    We analyze a posteriori error estimation and adaptive refinement algorithms for stochastic Galerkin Finite Element methods for countably-parametric elliptic boundary value problems. A residual error estimator is established which separates, in the energy norm, the effects of the GPC Galerkin discretization in parameter space from those of the Finite Element discretization in physical space. We prove that the adaptive algorithm converges by establishing a contraction property satisfied by its iterates, and we show that the sequences of triangulations produced by the algorithm in the FE discretization of the active GPC coefficients are asymptotically optimal. Numerical experiments illustrate the theoretical results.
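
    The contraction property referred to above typically takes the following generic form (notation assumed; the precise quasi-error used in the paper may differ): there exist constants \gamma > 0 and 0 < \rho < 1 such that, for the Galerkin solutions u_\ell and estimators \eta_\ell produced by the adaptive loop,

        \| u - u_{\ell+1} \|_E^2 + \gamma \, \eta_{\ell+1}^2
        \le \rho \left( \| u - u_\ell \|_E^2 + \gamma \, \eta_\ell^2 \right),

    so the combined quasi-error decreases geometrically and the algorithm converges.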

    Adaptive wavelet methods for elliptic partial differential equations with random operators

    We apply adaptive wavelet methods to boundary value problems with random coefficients, discretized by wavelets or frames in the spatial domain and by tensorized polynomials in the parameter domain. Greedy algorithms control the approximate application of the fully discretized random operator and the construction of sparse approximations to this operator. We suggest a power iteration for estimating errors induced by sparse approximations of linear operators.
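
    The power iteration mentioned in the last sentence can be sketched as follows. This is an assumed, generic implementation that estimates the spectral norm of the difference between an operator and its sparse approximation; the matrices, the threshold, and the name operator_error_estimate are illustrative only, whereas the paper works with sparse approximations of the fully discretized random operator rather than dense matrices.

        # Minimal sketch (assumed setup): estimate ||A - A_tilde||_2, the error induced
        # by a sparse approximation A_tilde of a linear operator A, via power iteration
        # applied to (A - A_tilde)^T (A - A_tilde).
        import numpy as np

        def operator_error_estimate(A, A_tilde, n_iter=50, rng=None):
            rng = np.random.default_rng() if rng is None else rng
            E = A - A_tilde                          # error operator (dense for simplicity)
            v = rng.standard_normal(E.shape[1])
            v /= np.linalg.norm(v)
            sigma = 0.0
            for _ in range(n_iter):
                w = E.T @ (E @ v)                    # one power-iteration step on E^T E
                norm_w = np.linalg.norm(w)
                if norm_w == 0.0:                    # exact approximation: no error left
                    return 0.0
                sigma = np.sqrt(norm_w)              # current estimate of the top singular value
                v = w / norm_w
            return sigma

        # Example: random matrix and a hard-thresholded (sparse) copy of it
        rng = np.random.default_rng(0)
        A = rng.standard_normal((200, 200)) / np.sqrt(200)
        A_tilde = np.where(np.abs(A) > 0.05, A, 0.0)
        print(operator_error_estimate(A, A_tilde, rng=rng))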